Set-Based Retrograde Analysis: Precomputing the Solution to 24-card Bridge Double Dummy Deals

Stone, Isaac, Sturtevant, Nathan R., Schaeffer, Jonathan

arXiv.org Artificial Intelligence

Retrograde analysis is used in game-playing programs to solve states at the end of a game, working backwards toward the start of the game. The algorithm iterates through and computes the perfect-play value for as many states as resources allow. We introduce setrograde analysis, which achieves the same results by operating on sets of states that share the same game value. The algorithm is demonstrated by computing exact solutions for Bridge double dummy card play. For deals with 24 cards remaining to be played (10^27 states, which can be reduced to 10^15 states using preexisting techniques), we strongly solve all deals. The setrograde algorithm performs a factor of 10^3 fewer search operations than a standard retrograde algorithm, producing a database with a factor of 10^4 fewer entries. For applicable domains, this allows retrograde searching to reach unprecedented search depths.

1 Introduction

Some of the early high-performance game-playing programs relied on retrograde analysis and endgame databases for strong play. The most notable example is Checkers, where 39 trillion endgame positions, all those with 10 or fewer pieces, were used as part of the CHINOOK program (Schaeffer et al. 1992) and for solving Checkers (Schaeffer et al. 2007). Endgame databases are also used widely in Chess programs (Chess 2024), as well as in many other games (e.g., for solving Awari (Romein and Bal 2003)). Endgame databases are most effective in games where there are far fewer positions at the end of the game than elsewhere. As a result, they have not been applied in games that lack this property. For instance, Sturtevant (2003) noted that in 3-player Chinese Checkers a winning arrangement of a single player's pieces admits approximately 10^23 possible permutations of the other players' pieces, making it infeasible to store all the variations of even a single winning configuration. While in Chinese Checkers each player has a unique endgame configuration (the other players' piece locations are irrelevant), in Go the locations of both sides' pieces in a terminal state are important. Hence these games require significantly different analysis (Berlekamp and Wolfe 1994). In a 4-player trick-based card game such as Bridge, the last two tricks have C(52,2) × C(50,2) × C(48,2) × C(46,2) ≈ 1.9 × 10^12 possible deals. However, there are only 16 ways for each deal to play out, meaning each deal is trivial to solve, but storing all states (as done in Checkers) is difficult.
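
To make the backward pass concrete, here is a minimal sketch of generic retrograde analysis for two-player win/loss games. This illustrates the standard technique the paper builds on, not the setrograde algorithm itself; the callbacks (states, successors, is_terminal, terminal_value) are hypothetical stand-ins for a real game's rules, and the state set is assumed to be closed under successors.

```python
from collections import deque

def retrograde(states, successors, is_terminal, terminal_value):
    """Generic retrograde analysis: back up perfect-play values from
    terminal states. Values are +1 (win) / -1 (loss) for the player to
    move; states whose value never resolves stay unset (draw/unknown).

    Sketch only -- `states`, `successors`, `is_terminal`, and
    `terminal_value` are hypothetical callbacks, not the paper's API.
    """
    # Build the predecessor graph once so values can flow backwards.
    preds = {s: [] for s in states}
    unrefuted = {}  # per state: moves not yet known to let the opponent win
    for s in states:
        succ = successors(s)
        unrefuted[s] = len(succ)
        for t in succ:
            preds[t].append(s)

    value = {s: terminal_value(s) for s in states if is_terminal(s)}
    frontier = deque(value)  # seed the queue with solved terminal states

    while frontier:
        t = frontier.popleft()
        for s in preds[t]:
            if s in value:
                continue
            if value[t] == -1:
                value[s] = +1          # some move reaches a lost position
                frontier.append(s)
            else:
                unrefuted[s] -= 1
                if unrefuted[s] == 0:  # every move lets the opponent win
                    value[s] = -1
                    frontier.append(s)
    return value
```

The set-based variant described in the abstract replaces this per-state queue with operations on whole sets of states sharing the same game value, which the abstract credits with a factor-of-10^3 reduction in search operations.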


Efficient Learning in Chinese Checkers: Comparing Parameter Sharing in Multi-Agent Reinforcement Learning

Adhikari, Noah, Gu, Allen

arXiv.org Artificial Intelligence

We show that multi-agent reinforcement learning (MARL) with full parameter sharing outperforms independent and partially shared architectures in the competitive, perfect-information, homogeneous game of Chinese Checkers. To run our experiments, we develop a new MARL environment: variable-size, six-player Chinese Checkers. This custom environment was developed in PettingZoo and supports all traditional rules of the game, including chained jumps. To the best of our knowledge, this is the first implementation of Chinese Checkers that remains faithful to the true game. Chinese Checkers is difficult to learn due to its large branching factor and potentially infinite horizons. We borrow the concept of branching actions (submoves) from complex action spaces in other RL domains, where a submove may not end a player's turn immediately. This drastically reduces the dimensionality of the action space. Our observation space is inspired by AlphaGo, with many binary game boards stacked in a 3D array to encode information. The PettingZoo environment, training and evaluation logic, and analysis scripts can be found on GitHub: https://github.com/noahadhikari/pettingzoo-chinese-checkers
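
As a rough sketch of the AlphaGo-style stacked-plane observation described above (an illustration under assumed conventions; the plane layout, board encoding, and function name are not taken from the linked repository):

```python
import numpy as np

def encode_observation(board, num_players, current_player):
    """Stack one binary plane per player into a 3D array.

    board: 2D int array, 0 = empty cell, k = piece of player k (1-based).
    current_player: 0-based seat index of the agent to move.
    Plane 0 always holds the current player's pieces, so one shared
    network can act for any seat; remaining planes follow turn order.
    """
    h, w = board.shape
    planes = np.zeros((num_players, h, w), dtype=np.float32)
    for offset in range(num_players):
        player = (current_player + offset) % num_players + 1
        planes[offset] = (board == player)
    return planes

# Example: a 4x4 toy board with two players.
board = np.array([[1, 0, 0, 0],
                  [0, 2, 0, 0],
                  [0, 0, 2, 0],
                  [0, 0, 0, 1]])
obs = encode_observation(board, num_players=2, current_player=1)
print(obs.shape)  # (2, 4, 4); plane 0 marks player 2's pieces
```

Rotating the planes so the acting player always occupies plane 0 is one common way to make full parameter sharing work across all six seats of a homogeneous game.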


Towards Understanding Chinese Checkers with Heuristics, Monte Carlo Tree Search, and Deep Reinforcement Learning

Liu, Ziyu, Zhou, Meng, Cao, Weiqing, Qu, Qiang, Yeung, Henry Wing Fung, Chung, Vera Yuk Ying

arXiv.org Machine Learning

The game of Chinese Checkers is a challenging traditional board game of perfect information that differs from other traditional games in two main aspects: first, unlike Chess, all checkers remain in the game indefinitely, so the branching factor of the search tree does not decrease as the game progresses; second, unlike Go, there is no upper bound on the depth of the search tree, since repetitions and backward movements are allowed. Therefore, even in a restricted game instance, the state space of the game can still be unbounded, making it challenging for a computer program to excel. In this work, we present an approach that effectively combines heuristics, Monte Carlo tree search, and deep reinforcement learning to build a Chinese Checkers agent without using any human game-play data. Experimental results show that our agent is competent under different scenarios and reaches the level of experienced human players.
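
To make the combination concrete, here is a bare-bones sketch of UCT-style Monte Carlo tree search with a heuristic cutoff in the playouts, reflecting the unbounded-depth issue the abstract raises. This is a generic illustration, not the authors' implementation; the `game` interface (legal_moves, apply, is_over, reward, heuristic) is hypothetical, with rewards in [-1, 1] from the perspective of the player to move.

```python
import math
import random

class Node:
    def __init__(self, state):
        self.state = state
        self.children = {}  # move -> Node
        self.visits = 0
        self.total = 0.0    # reward sum, from this node's player to move

def rollout(state, game, depth_limit=200):
    """Random playout, cut off by a heuristic because Chinese Checkers
    allows unbounded game lengths."""
    sign = 1
    for _ in range(depth_limit):
        if game.is_over(state):
            return sign * game.reward(state)
        state = game.apply(state, random.choice(game.legal_moves(state)))
        sign = -sign  # perspective alternates with each move
    return sign * game.heuristic(state)

def simulate(node, game, c=1.4):
    """One select/expand/rollout/backup pass (negamax convention)."""
    if game.is_over(node.state):
        reward = game.reward(node.state)
    else:
        untried = [m for m in game.legal_moves(node.state)
                   if m not in node.children]
        if untried:  # expand one child, then evaluate it with a playout
            move = random.choice(untried)
            child = Node(game.apply(node.state, move))
            node.children[move] = child
            value = rollout(child.state, game)
            child.visits += 1
            child.total += value   # stored from the child's perspective
            reward = -value
        else:        # select by UCB1; negate child stats for our view
            child = max(node.children.values(), key=lambda ch:
                        -ch.total / ch.visits
                        + c * math.sqrt(math.log(node.visits) / ch.visits))
            reward = -simulate(child, game, c)
    node.visits += 1
    node.total += reward
    return reward

def best_move(root, game, n_iters=1000):
    for _ in range(n_iters):
        simulate(root, game)
    return max(root.children, key=lambda m: root.children[m].visits)
```

In the agent the abstract describes, a learned policy/value from deep reinforcement learning would presumably guide or replace the uniform random playout shown here.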